Principle Of Transformation Groups

The principle of transformation groups is a rule for assigning ''epistemic'' probabilities in a statistical inference problem. It was first suggested by Edwin T. Jaynes and can be seen as a generalisation of the principle of indifference. It provides a method for creating ''objective ignorance probabilities'', in the sense that two people who apply the principle and are confronted with the same information will assign the same probabilities.


Motivation and description of the method

The method is motivated by the following normative principle, or desideratum: ''In two problems where we have the same prior information we should assign the same prior probabilities.'' The method then comes about from "transforming" a given problem into an equivalent one. It has close connections with group theory, and to a large extent is about finding symmetry in a given problem and then exploiting that symmetry to assign prior probabilities.

In problems with discrete variables (e.g. dice, cards, categorical data) the principle reduces to the principle of indifference, since the "symmetry" in the discrete case is a permutation of the labels: the permutation group is the relevant transformation group for such problems. In problems with continuous variables, the method generally reduces to solving a differential equation. Since differential equations do not always have unique solutions, the method cannot be guaranteed to produce a unique solution. However, for a large class of the most common types of parameters it does lead to unique solutions (see the examples below).


Examples


Discrete case: coin flipping

Consider a problem where all you are told is that there is a coin with a head (H) and a tail (T). Denote this information by ''I''. You are then asked "what is the probability of heads?" Call this ''problem 1'' and denote the probability P(H\mid I). Now consider the question "what is the probability of tails?" Call this ''problem 2'' and denote the probability P(T\mid I). From the information actually given, there is no distinction between heads and tails: the whole paragraph above could be rewritten with "heads" and "tails" interchanged, and "H" and "T" interchanged, and the problem statement would be unchanged. The desideratum then demands that

:P(H\mid I)=P(T\mid I)

The probabilities must add to 1, which means that

:P(H\mid I)+P(T\mid I)=1 \rightarrow 2 P(H\mid I)=1 \rightarrow P(H\mid I)=0.5

Thus we have a unique solution. The argument extends easily to ''N'' categories, giving the "flat" prior probability ''1/N''.

This provides a ''consistency''-based argument for the principle of indifference, which goes as follows: ''if someone is truly ignorant about a discrete/countable set of outcomes apart from their potential existence, but does not assign them equal prior probabilities, then they are assigning different probabilities when given the same information.'' Equivalently: ''a person who does not use the principle of indifference to assign prior probabilities to discrete variables is either not ignorant about them, or reasoning inconsistently.''
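The uniqueness claim can be illustrated with a short sketch (the function names here are hypothetical, not from any particular library): a prior over ''N'' labels is invariant under every relabelling only if it is flat.

```python
from itertools import permutations

def flat_prior(n):
    """The flat assignment 1/N over n labels, as in the text."""
    return [1.0 / n] * n

def is_permutation_invariant(p):
    """True if relabelling the outcomes (applying any permutation of
    the labels) leaves the probability assignment unchanged."""
    return all(list(p) == [p[i] for i in perm]
               for perm in permutations(range(len(p))))

# The flat prior passes; any unequal assignment fails, because some
# permutation swaps two outcomes with different probabilities.
print(is_permutation_invariant(flat_prior(4)))      # True
print(is_permutation_invariant([0.5, 0.3, 0.2]))    # False
```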


Continuous case: location parameter

This is the easiest example for continuous variables. It arises from stating that one is "ignorant" of the location parameter in a given problem. The statement that a parameter \mu is a "location parameter" means that the sampling distribution, or likelihood, of an observation ''X'' depends on \mu only through the difference X-\mu:

:p(X\mid \mu,I)=f(X-\mu)

for some normalised, but otherwise arbitrary, distribution ''f(.)''. Note that the given information that ''f(.)'' is a normalised distribution is a significant prerequisite for the final conclusion of a uniform prior, because uniform probability distributions can only be normalised over a finite domain. In other words, the assumption that ''f(.)'' is normalised implicitly also requires that the location parameter \mu does not extend to infinity in any of its dimensions; otherwise the uniform prior would not be normalisable. Examples of location parameters include the mean parameter of a
normal distribution
with known variance, and the median parameter of a
Cauchy distribution
with known inter-quartile range. The two "equivalent problems" in this case, given one's knowledge of the sampling distribution p(X\mid \mu,I)=f(X-\mu) but no other knowledge about \mu, are related by a "shift" of equal magnitude in ''X'' and \mu. This is because of the relation:

:f(X-\mu)=f([X+b]-[\mu+b])=f(X^{*}-\mu^{*})

where X^{*}=X+b and \mu^{*}=\mu+b. So simply "shifting" all quantities up by some number ''b'', solving in the "shifted space", and then "shifting" back to the original space should give exactly the same answer as working in the original space. The transformation from \mu to \mu^{*} has a
Jacobian
of simply 1, and so the prior probability g(\mu) = p(\mu\mid I) must satisfy the functional equation:

:g(\mu)=\left|\frac{d\mu^{*}}{d\mu}\right| g(\mu^{*}) = g(\mu+b)

The only function that satisfies this equation for every ''b'' is the "constant prior":

:p(\mu\mid I) \propto 1

Thus the uniform prior is justified for expressing complete ignorance about a continuous location parameter confined to a finite range.
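The shift symmetry can be checked numerically for any concrete choice of ''f(.)''. The sketch below (hypothetical names, with a standard normal standing in for ''f'') verifies that shifting ''X'' and \mu together leaves the likelihood unchanged.

```python
import math

def f(u):
    # One concrete choice of the normalised density f(.): a standard normal.
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def likelihood(x, mu):
    # Location-parameter form: depends on x and mu only through x - mu.
    return f(x - mu)

# Shifting the observation and the parameter by the same amount b is the
# symmetry the argument exploits: the likelihood is unchanged.
x, mu, b = 1.3, 0.4, 7.9
print(abs(likelihood(x, mu) - likelihood(x + b, mu + b)) < 1e-12)  # True
```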


Continuous case: scale parameter

As in the above argument, the statement that \sigma is a scale parameter means that the sampling distribution has the functional form:

:p(X\mid \sigma,I)=\frac{1}{\sigma}f\left(\frac{X}{\sigma}\right)

where, as before, ''f(.)'' is a normalised probability density function. The requirement that probabilities be finite and positive forces the condition \sigma>0. Examples include the standard deviation of a normal distribution with known mean, or the
gamma distribution
. The "symmetry" in this problem is found by noting that

:\frac{X}{\sigma}=\frac{aX}{a\sigma} \;\;\;\; a>0

and setting X^{*} = aX and \sigma^{*} = a\sigma. But, unlike in the location parameter case, the Jacobian of this transformation in the sample space and the parameter space is ''a'', not 1. So the sampling probability changes to:

:p(X^{*}\mid \sigma,I)=\frac{1}{a}\cdot\frac{1}{\sigma} f\left(\frac{X^{*}}{a\sigma}\right)= \frac{1}{\sigma^{*}} f\left(\frac{X^{*}}{\sigma^{*}}\right)

which is invariant (i.e. has the same form before and after the transformation), and the prior probability changes to:

:p(\sigma\mid I)= a\, p(\sigma^{*}\mid I)=a\, p\left(a\sigma\mid I\right)

which has the unique solution (up to a proportionality constant):

:p(\sigma\mid I) \propto \frac{1}{\sigma} \;\rightarrow\; p(\log(\sigma)\mid I) \propto 1

This is the well-known
Jeffreys prior
for scale parameters, which is "flat" on the log scale, although it is usually derived by a different argument from the one here, based on the
Fisher information
function. The fact that the two methods give the same result in this case does not mean that they agree in general.
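The invariance condition for the scale prior can also be checked directly. The sketch below (hypothetical names) tests the condition g(\sigma) = a\,g(a\sigma) from the derivation above: the 1/\sigma prior satisfies it, while a uniform prior does not.

```python
def jeffreys(sigma):
    # Candidate prior g(sigma) = 1/sigma (the Jeffreys scale prior,
    # up to a proportionality constant).
    return 1.0 / sigma

def uniform(sigma):
    # A flat prior on sigma, for contrast.
    return 1.0

def satisfies_scale_invariance(g):
    # Invariance condition from the text: g(sigma) = a * g(a * sigma), a > 0,
    # checked on a small grid of sigma values and scale factors.
    return all(abs(g(s) - a * g(a * s)) < 1e-12
               for s in (0.1, 1.0, 3.7)
               for a in (0.5, 2.0, 10.0))

print(satisfies_scale_invariance(jeffreys))  # True
print(satisfies_scale_invariance(uniform))   # False
```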


Continuous case: Bertrand's paradox

Edwin Jaynes used this principle to provide a resolution of Bertrand's paradox by stating his ignorance about the exact position of the circle. The details are available in the reference.


Discussion

This argument depends crucially on ''I''; changing the information may result in a different probability assignment. It is just as crucial as changing axioms in deductive logic: small changes in the information can lead to large changes in the probability assignments allowed by "consistent reasoning".

To illustrate, suppose that the coin-flipping example also states, as part of the information, that the coin has a side (S) (i.e. it is a ''real coin''). Denote this new information by ''N''. The same argument using "complete ignorance", or more precisely the information actually described, gives:

:P(H\mid I,N)=P(T\mid I,N)=P(S\mid I,N)=1/3

But this seems absurd to most people; intuition tells us that P(S) should be very close to zero. This is because most people's intuition does not see any "symmetry" between a coin landing on its side and a coin landing heads. Our intuition says that the particular "labels" actually carry some information about the problem. A simple argument could make this more formal mathematically (e.g. the physics of the problem make it difficult for a flipped coin to land on its side): we distinguish between "thick" coins and "thin" coins (here thickness is measured relative to the coin's diameter). It could reasonably be assumed that:

:P(S\mid \text{thick coin}) \neq P(S\mid \text{thin coin})

Note that this new information probably wouldn't break the symmetry between "heads" and "tails", so ''that'' permutation would still apply in describing "equivalent problems", and we would require:

:P(T\mid \text{thin}) = P(H\mid \text{thin}) \neq P(H\mid \text{thick})=P(T\mid \text{thick})

This is a good example of how the principle of transformation groups can be used to "flesh out" personal opinions. All of the information used in the derivation is explicitly stated. If a prior probability assignment doesn't "seem right" according to what your intuition tells you, then there must be some "background information" that has not been put into the problem.
It is then one's task to work out what that information is. In some sense, combining the method of transformation groups with one's intuition can be used to "weed out" the assumptions one actually holds, which makes it a very powerful tool for prior elicitation. Introducing the thickness of the coin as a variable is permissible because its existence was implied (by its being a real coin) but its value was not specified in the problem. Introducing a "nuisance parameter" and then making the answer invariant to this parameter is a very useful technique for solving supposedly "ill-posed" problems like Bertrand's paradox; this has been called "the well-posing strategy" by some.

The real power of this principle lies in its application to continuous parameters, where the notion of "complete ignorance" is not as well defined as in the discrete case. However, if applied with infinite limits, it often gives
improper prior
distributions. Note that the discrete case for a countably infinite set, such as (0,1,2,...), also produces an improper discrete prior. For most cases where the likelihood is sufficiently "steep" this does not present a problem. However, in order to be absolutely sure to avoid incoherent results and paradoxes, the prior distribution should be approached via a well-defined and well-behaved limiting process. One such process is the use of a sequence of priors with increasing range, such as

:f(M)=\frac{1}{2b} \;\;\;\; -b<M<b

where the limit b \rightarrow \infty is to be taken ''at the end of the calculation'', i.e. after the normalisation of the posterior distribution. What this effectively does is ensure that one takes the limit of a ratio, rather than the ratio of two limits. See Limit of a function#Properties for details on limits and why this order of operations is important.

If the limit of the ratio does not exist or diverges, then this gives an improper posterior (i.e. a posterior that does not integrate to one). This indicates that the data are so uninformative about the parameters that the prior probability of arbitrarily large values still matters in the final answer. In some sense, an improper posterior means that the information contained in the data has not "ruled out" arbitrarily large values. Looking at improper priors this way, it seems to make some sense that "complete ignorance" priors should be improper: the information used to derive them is so meagre that it cannot rule out absurd values on its own. From a state of complete ignorance, only the data or some other form of additional information can rule out such absurdities.
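The limiting process can be illustrated numerically. The sketch below (hypothetical names; a normal likelihood is assumed purely for concreteness) computes the posterior mean of a location parameter ''M'' under a truncated-uniform prior proportional to 1/(2b) on (-b,b), and shows that it settles to the data value as ''b'' grows: the limit of the ratio of integrals is well defined even though the limiting prior itself is improper.

```python
import math

def posterior_mean(xbar, sigma, b, n=20001):
    """Posterior mean of M under a normal likelihood centred at xbar and
    the prior f(M) = 1/(2b) on (-b, b), by simple quadrature.  The 1/(2b)
    factor cancels in the numerator/denominator ratio -- which is exactly
    why the limit b -> infinity is taken of the ratio of integrals, not
    of each integral separately."""
    h = 2.0 * b / (n - 1)
    num = den = 0.0
    for i in range(n):
        m = -b + i * h
        w = math.exp(-0.5 * ((xbar - m) / sigma) ** 2)  # likelihood at m
        num += m * w
        den += w
    return num / den

# As the prior range grows the posterior mean converges to xbar,
# reproducing the improper-uniform-prior answer as a well-behaved limit.
for b in (4.0, 8.0, 16.0):
    print(posterior_mean(2.5, 1.0, b))
```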




References

* Edwin Thompson Jaynes. ''Probability Theory: The Logic of Science''. Cambridge University Press, 2003. ISBN 0-521-59271-2.